Support for DECODE operator #3213
Conversation
@tensorflow/micro Add support for alternate decompression memory to DECODE operator. Additional unit tests. Update generic benchmark application and Makefile. bug=fixes tensorflow#3212
Force-pushed from b582198 to 59589c6
veblush left a comment
Thank you for the PR; I have some minor feedback.
Regarding the decompression memory: How is the required size calculated, and do we have tooling for that? Also, when is it preferable to use multiple decompression regions rather than just one?
```cpp
TfLiteTensor* output = nullptr;
TfLiteStatus status = kTfLiteOk;

micro_context->ResetDecompressionMemoryAllocations();
```
Does this mean that there will be only one Decode node in the entire graph? It looks like, if there were two Decode nodes, the second one's Prepare call would wipe out the memory assignments planned by the first one.
<discussion moved to alternate venue>
```cpp
size_t tensor_output_index,
TfLiteTensor* output) {
  if (output->data.data != nullptr) {
    // If memory has already been assigned to the tensor, leave it be
```
I'm curious how this would happen?
<discussion moved to alternate venue>
@veblush The required size and number of regions are decided by the application; TFLM makes no assumptions about the processor memory. There is no tooling associated with alternate decompression memory. An API on the MicroInterpreter is supplied, and it is up to the application developer how they wish to use it. An example of how the API is used is available in the Generic Benchmark application and its associated documentation.
veblush left a comment
LGTM! The remaining questions were resolved through offline discussion.
Merged commit 6c1c1a8: Support for DECODE operator (tensorflow#3213)… [email protected], 20.11.2025 12:14 AM